Deploying reliable deep learning techniques in interdisciplinary applications requires learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations follow from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction can be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network, dubbed NeuroExplainer, and apply it to uncover altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attention maps and the corresponding discriminative representations, so as to accurately distinguish preterm infants from term-born infants at term-equivalent age. The hierarchical attention-decoding modules are learned under subject-level weak supervision, coupled with targeted regularizers deduced from domain knowledge regarding brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer yields quantitatively reliable explanations that are qualitatively consistent with representative neuroimaging studies.
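As a rough illustration of the sparsity aspect of such explainability-oriented regularization, the snippet below penalizes the entropy of a normalized attention map so that attention concentrates on a few regions; it is a minimal sketch under our own assumptions, not NeuroExplainer's actual fidelity/sparsity/stability terms or attention architecture.

```python
# Minimal sketch of an attention-sparsity regularizer (hypothetical, not the
# paper's exact loss): normalize each attention map to a probability
# distribution and penalize its entropy, so low loss = sparse, focal attention.
import torch

def attention_sparsity_loss(attn, eps=1e-8):
    # attn: (batch, ...) non-negative attention scores
    p = attn.flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + eps)
    return -(p * (p + eps).log()).sum(dim=1).mean()
```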
Seismic data often suffer from severe noise caused by environmental factors, which seriously affects subsequent applications. Traditional hand-crafted denoisers, such as filters and regularizations, utilize interpretable domain knowledge to design generalizable denoising techniques, but their representation capacity may be inferior to that of deep learning denoisers, which can learn complex and representative denoising mappings from abundant training pairs. However, due to the scarcity of high-quality training pairs, deep learning denoisers may suffer from generalization issues across various scenarios. In this work, we propose a self-supervised method that combines the representation capacity of deep denoisers with the generalization ability of hand-crafted regularization for seismic data random noise attenuation. Specifically, we leverage the Self2Self (S2S) learning framework with a trace-wise masking strategy for seismic data denoising using only the observed noisy data. In parallel, we employ a weighted total variation (WTV) regularizer to further capture the horizontally local smooth structure of seismic data. Our method, dubbed S2S-WTV, enjoys both the high representation ability of the self-supervised deep network and the good generalization ability brought by the hand-crafted WTV regularizer and the self-supervised learning scheme. Therefore, it can remove random noise more effectively and stably while preserving the details and edges of the clean signal. To solve the S2S-WTV optimization model, we introduce an alternating direction method of multipliers (ADMM)-based algorithm. Extensive experiments on synthetic and field noisy seismic data demonstrate the effectiveness of our method compared with state-of-the-art traditional and deep learning-based seismic data denoising methods.
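To make the two ingredients named above concrete, the sketch below shows (1) a trace-wise masking step in the spirit of Self2Self and (2) a weighted horizontal total-variation penalty; the drop ratio, weights, and loss composition are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch (not the authors' code) of trace-wise masking and a weighted
# horizontal TV term for a 2-D seismic section of shape (time, traces).
import numpy as np

def trace_wise_mask(section, drop_ratio=0.1, rng=None):
    """Randomly zero out whole traces (columns); the network is trained to
    predict the dropped traces from the kept ones."""
    rng = rng or np.random.default_rng()
    n_t, n_x = section.shape
    keep = rng.random(n_x) > drop_ratio              # True = trace kept
    mask = np.broadcast_to(keep, (n_t, n_x)).astype(section.dtype)
    return section * mask, mask

def weighted_horizontal_tv(section, w=None):
    """Weighted TV along the trace axis, encouraging lateral smoothness."""
    dx = np.abs(np.diff(section, axis=1))            # horizontal first differences
    w = np.ones_like(dx) if w is None else w
    return float((w * dx).sum())
```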
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts in the captured CT images and thus impair subsequent clinical treatment. For this metal artifact reduction (MAR) task, existing deep-learning-based methods have achieved promising reconstruction performance. Nevertheless, there is still room for further improvement in MAR performance and generalization ability, since some important prior knowledge underlying this specific task has not been fully exploited. In this paper, we carefully analyze the characteristics of metal artifacts and propose an orientation-shared convolution representation strategy to adapt to the physical prior structure of the artifacts, i.e., their rotationally symmetric streaking patterns. The proposed method adopts a Fourier-series-expansion-based filter parametrization in artifact modeling, which can better separate artifacts from anatomical tissues and boost model generalizability. Comprehensive experiments on synthesized and clinical datasets show that our method surpasses current representative MAR methods in detail preservation. Code will be available at \url{https://github.com/hongwang01/OSCNet}
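As a loose illustration of what a Fourier-series-based filter parametrization can look like, the snippet below builds a small 2-D filter from a truncated angular Fourier series, so that differently oriented versions of one streak-like pattern share a single coefficient set; the basis choice and radial envelope are our own assumptions and do not reproduce the paper's parametrization.

```python
# Hypothetical sketch: a 2-D filter whose taps are given by a truncated Fourier
# series in the angular coordinate, modulated by an assumed radial envelope.
import numpy as np

def fourier_parametrized_filter(coeffs_cos, coeffs_sin, size=9):
    ys, xs = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    theta = np.arctan2(ys, xs)                       # angle of each filter tap
    r = np.sqrt(xs ** 2 + ys ** 2) / (size // 2)     # normalized radius
    filt = np.zeros((size, size))
    for k, (a, b) in enumerate(zip(coeffs_cos, coeffs_sin), start=1):
        filt += a * np.cos(k * theta) + b * np.sin(k * theta)
    return filt * np.exp(-r ** 2)                    # assumed radial envelope

# Example: two angular harmonics shared across all orientations of the pattern.
streak_filter = fourier_parametrized_filter([1.0, 0.3], [0.0, 0.2], size=9)
```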
Since higher-order tensors are naturally suitable for representing real-world multi-dimensional data, e.g., color images and videos, low-rank tensor representation has become one of the emerging areas in machine learning and computer vision. However, classical low-rank tensor representations can only represent data on a finite meshgrid due to their intrinsically discrete nature, which hinders their potential applicability in many scenarios beyond the meshgrid. To break this barrier, we propose a low-rank tensor function representation (LRTFR), which can continuously represent data beyond the meshgrid with infinite resolution. Specifically, the suggested tensor function, which maps an arbitrary coordinate to the corresponding value, can continuously represent data in an infinite real space. In parallel to discrete tensors, we develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization. We theoretically justify that low-rank and smooth regularizations are harmoniously unified in LRTFR, which leads to high effectiveness and efficiency for continuous data representation. Extensive multi-dimensional data recovery applications arising from image processing (image inpainting and denoising), machine learning (hyperparameter optimization), and computer graphics (point cloud upsampling) substantiate the superiority and versatility of our method compared with state-of-the-art methods. In particular, experiments beyond the original meshgrid resolution (hyperparameter optimization) or even beyond the meshgrid (point cloud upsampling) validate the favorable performance of our method for continuous representation.
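The following toy sketch conveys the factorized tensor-function idea in code: a continuous coordinate is passed through three small factor networks whose outputs are combined by a Tucker-style core, so the rank constraint is baked into the parameterization. The network sizes, core form, and combination rule are illustrative assumptions rather than the paper's implementation.

```python
# Hypothetical low-rank tensor function: f(x, y, z) = core contracted with
# three learned factor vectors, evaluated at arbitrary continuous coordinates.
import torch
import torch.nn as nn

class LowRankTensorFunction(nn.Module):
    def __init__(self, ranks=(8, 8, 8), hidden=64):
        super().__init__()
        self.factors = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, r))
            for r in ranks
        )
        self.core = nn.Parameter(torch.randn(*ranks) * 0.01)  # Tucker-style core

    def forward(self, coords):
        # coords: (N, 3) continuous coordinates, one column per tensor mode
        fx, fy, fz = (f(coords[:, i:i + 1]) for i, f in enumerate(self.factors))
        # contract the core with the three factor vectors of each sample
        return torch.einsum('abc,na,nb,nc->n', self.core, fx, fy, fz)
```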
Although current deep-learning-based methods have achieved promising performance on the blind single-image super-resolution (SISR) task, most of them mainly focus on heuristically constructing diverse network architectures and place less emphasis on explicitly embedding the physical generation mechanism between the blur kernel and the high-resolution (HR) image. To alleviate this issue, we propose a model-driven deep neural network, called KXNet, for blind SISR. Specifically, to solve the classical SISR model, we propose a simple yet effective iterative algorithm. Then, by unfolding the involved iterative steps into corresponding network modules, we naturally construct KXNet. The main specificity of the proposed KXNet is that the entire learning process is fully and reasonably integrated with the intrinsic physical mechanism underlying this SISR task. Thus, the learned blur kernel exhibits a clear physical pattern, and the mutually iterative process between the blur kernel and the HR image can well guide KXNet to evolve in the correct direction. Extensive experiments on synthetic and real data demonstrate the superior accuracy and generality of our method beyond current representative state-of-the-art blind SISR methods. Code is available at: \url{https://github.com/jiahong-fu/kxnet}.
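To illustrate the unfolding idea only, the sketch below turns an alternating "update kernel, then update image" scheme into stage-wise learnable modules; the stage bodies are hypothetical residual CNN stand-ins, and the real unfolded updates would also take the low-resolution observation and each other's estimates as inputs, as derived from the paper's iterative algorithm.

```python
# Schematic deep unrolling (our own toy version, not KXNet): T iterations of an
# alternating scheme become T pairs of learnable stages.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Hypothetical lightweight stage: a residual two-layer CNN."""
    def __init__(self, ch=1, hidden=16):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, hidden, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(hidden, ch, 3, padding=1))

    def forward(self, t):
        return t + self.body(t)

class UnrolledBlindSR(nn.Module):
    def __init__(self, n_stages=5):
        super().__init__()
        self.k_stages = nn.ModuleList(ResBlock() for _ in range(n_stages))
        self.x_stages = nn.ModuleList(ResBlock() for _ in range(n_stages))

    def forward(self, k0, x0):
        k, x = k0, x0
        for k_stage, x_stage in zip(self.k_stages, self.x_stages):
            k = k_stage(k)   # kernel-update module (K-step of the unfolded algorithm)
            x = x_stage(x)   # image-update module (X-step of the unfolded algorithm)
        return x, k
```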
Pseudo-labeling has proven to be a promising semi-supervised learning (SSL) paradigm. Existing pseudo-labeling methods commonly assume that the class distribution of the training data is balanced. However, this assumption is far from realistic scenarios, and existing pseudo-labeling methods suffer from severe performance degradation in the context of class imbalance. In this work, we study pseudo-labeling under the class-imbalanced semi-supervised setting. The core idea is to automatically absorb the training bias caused by class imbalance with a bias-adaptive classifier, which couples the original linear classifier with a bias attractor. The bias attractor is designed as a lightweight residual network that adapts to the training bias. Specifically, the bias attractor is learned through a bi-level learning framework, so that the bias-adaptive classifier can fit the imbalanced training data while the linear classifier provides unbiased label predictions for each class. We conduct extensive experiments under various imbalanced semi-supervised settings, and the results demonstrate that our method is applicable to different pseudo-labeling models and outperforms prior art.
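A small sketch of such a bias-adaptive classifier, under our reading of the abstract, is shown below: a plain linear classifier plus a lightweight residual bias attractor whose output is added to the logits while fitting imbalanced data. Layer sizes are illustrative, and the bi-level training loop is omitted.

```python
# Hypothetical bias-adaptive classifier: linear head + residual bias attractor.
import torch
import torch.nn as nn

class BiasAdaptiveClassifier(nn.Module):
    def __init__(self, feat_dim, n_classes, hidden=32):
        super().__init__()
        self.linear = nn.Linear(feat_dim, n_classes)        # unbiased predictor
        self.bias_attractor = nn.Sequential(                # absorbs training bias
            nn.Linear(n_classes, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, feats, use_attractor=True):
        logits = self.linear(feats)
        if use_attractor:                                   # fitting imbalanced data
            logits = logits + self.bias_attractor(logits)   # residual bias correction
        return logits                                       # at test time: use_attractor=False
```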
Inspired by the great success of deep neural networks, learning-based methods have achieved promising performance for metal artifact reduction (MAR) in computed tomography (CT) images. However, most existing methods put less emphasis on modeling and embedding the intrinsic prior knowledge underlying this specific MAR task into their network designs. Against this issue, we propose an adaptive convolutional dictionary network (ACDNet), which leverages both model-based and learning-based methods. Specifically, we explore the prior structures of metal artifacts, e.g., non-local repetitive streaking patterns, and encode them as an explicit weighted convolutional dictionary model. Then, a simple yet effective algorithm is carefully designed to solve the model. By unfolding every iterative step of the proposed algorithm into a network module, we explicitly embed the prior structure into a deep network, \emph{i.e.,} achieving clear interpretability for the MAR task. Furthermore, our ACDNet can automatically learn the prior of artifact-free CT images from training data and adaptively adjust the representation kernels for each input CT image based on its content. Hence, our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods. Comprehensive experiments on synthetic and clinical datasets demonstrate the superiority of our ACDNet in terms of effectiveness and model generalization. Code is available at \url{https://github.com/hongwang01/acdnet}
It is known that a decomposition into low-rank and sparse matrices (\textbf{L+S} for short) can be achieved by several robust PCA techniques. Besides low-rankness, local smoothness (\textbf{LSS}) is a vital prior for many real-world matrix data, such as hyperspectral images and surveillance videos, which makes such matrices possess low-rankness and local smoothness properties at the same time. This poses an interesting question: can we exactly achieve a matrix decomposition of the \textbf{L\&LSS+S} form? To address this issue, we propose a new RPCA model based on three-dimensional correlated total variation regularization (3DCTV-RPCA for short) by fully exploiting and encoding the prior expression underlying such jointly low-rank and locally smooth matrices. Specifically, using a modification of the Golfing scheme, we prove that under some mild assumptions the proposed 3DCTV-RPCA model can exactly decompose both components, which should be the first theoretical guarantee among all related methods combining low-rankness and local smoothness. In addition, by utilizing the fast Fourier transform (FFT), we propose an efficient ADMM algorithm with a solid convergence guarantee for solving the resulting optimization problem. Finally, a series of experiments on both simulations and real applications demonstrate the general validity of the proposed 3DCTV-RPCA model.
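For concreteness, a schematic form of such an \textbf{L\&LSS+S} decomposition model is given below; this is a paraphrase of the abstract, with the 3DCTV term written as a sum of nuclear norms of directional difference maps and $\lambda$ a trade-off parameter, so the precise regularizer definition and constants should be taken from the paper.

```latex
% Schematic decomposition model (paraphrase): M is the observation, L the
% low-rank and locally smooth component, S the sparse component, and
% \nabla_d the difference (gradient) operator along the d-th mode.
\min_{L,\,S}\ \sum_{d=1}^{3} \big\| \nabla_d L \big\|_{*} \;+\; \lambda \, \| S \|_{1}
\quad \text{s.t.} \quad M = L + S .
```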
Continual learning requires a model to learn new tasks while retaining previously learned knowledge. Various algorithms have been proposed to address this challenge. So far, rehearsal-based methods such as experience replay have achieved state-of-the-art performance. These methods save a small portion of past tasks' data in a memory buffer to prevent the model from forgetting previously learned knowledge. However, most of them treat every new task equally, i.e., the hyperparameters of the framework are fixed when learning different new tasks. Such a setting lacks consideration of the relationship/similarity between past and new tasks. For example, knowledge/features learned from dogs are more beneficial for recognizing cats (a new task) than those learned from buses. In this regard, we propose a meta-learning algorithm based on bi-level optimization to adaptively tune the relationship between the knowledge extracted from past and new tasks. Therefore, the model can find an appropriate gradient direction during continual learning and avoid severe overfitting on the memory buffer. Extensive experiments are conducted on three publicly available datasets (i.e., CIFAR-10, CIFAR-100, and Tiny-ImageNet). Experimental results demonstrate that the proposed method consistently improves the performance of all baselines.
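The toy sketch below conveys the bi-level flavor only: a replay weight `alpha` is tuned by differentiating a held-out replay loss through a single look-ahead SGD step of the model. The weighting scheme, held-out split, and update rule are our own simplifications, not the paper's algorithm.

```python
# Toy one-step bi-level update for rehearsal (requires PyTorch >= 2.0 for
# torch.func.functional_call); `alpha` must be a tensor with requires_grad=True.
import torch
import torch.nn.functional as F

def alpha_gradient(model, new_batch, mem_batch, heldout_batch, alpha, inner_lr=0.1):
    (x_new, y_new), (x_mem, y_mem), (x_val, y_val) = new_batch, mem_batch, heldout_batch

    # inner objective: new-task loss + alpha-weighted replay loss
    inner = F.cross_entropy(model(x_new), y_new) + alpha * F.cross_entropy(model(x_mem), y_mem)

    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(inner, params, create_graph=True)
    lookahead = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # outer objective: loss of the look-ahead model on held-out memory samples
    val_logits = torch.func.functional_call(model, lookahead, (x_val,))
    outer = F.cross_entropy(val_logits, y_val)
    (alpha_grad,) = torch.autograd.grad(outer, [alpha])
    return alpha_grad   # use e.g. alpha = alpha - outer_lr * alpha_grad
```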
This paper proposes a robust data mining approach based on rough sets that can simultaneously achieve feature selection, classification, and knowledge representation. Rough sets have good interpretability and constitute a popular feature selection method. However, low efficiency and low accuracy are their main drawbacks, which limit their applicability. In this paper, regarding accuracy, we first identify the ineffectiveness of rough sets caused by overfitting, especially when processing noisy attributes, and propose a robust measure for attributes, called relative importance. We further introduce the concept of a "rough concept tree" for knowledge representation and classification. Experimental results on public benchmark datasets show that the proposed framework achieves higher accuracy than seven popular or state-of-the-art feature selection methods.